140 research outputs found

    View-dependent Object Simplification and its Application in the case of Billboard Clouds

    Get PDF
    In this Master's thesis we present a new approach to simplifying a model representation based on supplementary knowledge of a region in which the observer is allowed to move, the so-called view cell. The simplified representation should be faster to render without losing similarity to the original objects. To ensure this, we present our main contribution, a new error-bounding method which, to the best of our knowledge, makes it possible for the first time to bound the error of a representation for a given view cell. In particular, several widely accepted assumptions are shown to be inexact. We show several properties for the 3D case and solve a particular case, and give a numerical solution for points in 3D. For the 2D case we obtain an exact solution, which allows our method to be applied to 2.5-dimensional scenes. Our error-bounding method is then used in the context of Billboard Clouds [DDSD03]. Still, our result is more general and not restricted to this particular usage. The view-dependent Billboard Clouds introduced in this thesis have several advantages. The error of the representation can be bounded and the simplification is very effective: for example, a 4480-triangle scene has been simplified to approximately 40 billboards (80 triangles) with an ≈ 5% representation error, for a centered view cell inside the scene with a size of approximately 1/10 of the bounding diagonal. While most algorithms add ad-hoc criteria to preserve silhouettes, our algorithm preserves them automatically. Other advantages of our method are that it works for any kind of triangulated input and that it is easy to use; only two parameters are needed (simplification error and texture quality).
It is completely independent of the view frustum used for the observer, which is uncommon, as most image-based view-dependent simplification methods require a fixed view frustum. Our method also avoids a problem shared by most other view-cell approaches, where the representation degrades as the observer approaches the border of the view cell. The construction of view-dependent Billboard Clouds largely follows the original Billboard Cloud approach [DDSD03], with some slight improvements over the original algorithm.
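The thesis derives an analytic bound, which is not reproduced here; but the quantity being bounded, the angular error a billboard projection introduces as seen from anywhere inside the view cell, can be illustrated with a naive sampling sketch. All names are hypothetical and this is only an illustration of the error measure, not the thesis's bounding method (Python with NumPy assumed):

```python
import numpy as np

def angular_error(p, p_proj, viewpoint):
    """Angle (radians) between the directions from `viewpoint` to the
    original point and to its billboard-plane projection."""
    a = p - viewpoint
    b = p_proj - viewpoint
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sampled_view_cell_error(p, plane_point, plane_normal, cell_samples):
    """Project p onto a billboard plane and estimate the worst-case
    angular error over sampled viewpoints inside the view cell.
    (A sampling illustration only; the thesis bounds this analytically.)"""
    n = plane_normal / np.linalg.norm(plane_normal)
    p_proj = p - np.dot(p - plane_point, n) * n
    return max(angular_error(p, p_proj, v) for v in cell_samples)
```

A point already on the plane has zero error for every viewpoint; points off the plane accrue error that depends on where in the view cell the observer stands, which is exactly why a per-view-cell bound is needed.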

    Template-free Articulated Neural Point Clouds for Reposable View Synthesis

    Full text link
    Dynamic Neural Radiance Fields (NeRFs) achieve remarkable visual quality when synthesizing novel views of time-evolving 3D scenes. However, the common reliance on backward deformation fields makes reanimation of the captured object poses challenging. Moreover, state-of-the-art dynamic models are often limited by low visual fidelity, long reconstruction times, or specificity to narrow application domains. In this paper, we present a novel method utilizing a point-based representation and Linear Blend Skinning (LBS) to jointly learn a Dynamic NeRF and an associated skeletal model, even from sparse multi-view video. Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses while significantly reducing the necessary learning time compared to existing work. We demonstrate the versatility of our representation on a variety of articulated objects from common datasets and obtain reposable 3D reconstructions without the need for object-specific skeletal templates. Code will be made available at https://github.com/lukasuz/Articulated-Point-NeRF
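Linear Blend Skinning, the forward-warping core the abstract relies on, blends per-bone rigid transforms with per-point weights. A minimal NumPy sketch (function names hypothetical; the paper's learned weights and NeRF machinery are not shown):

```python
import numpy as np

def lbs_warp(points, weights, bone_transforms):
    """Forward-warp points with Linear Blend Skinning.
    points: (N, 3), weights: (N, B) rows summing to 1,
    bone_transforms: (B, 4, 4) homogeneous bone matrices."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    # Blend the 4x4 transforms per point, then apply each blended matrix.
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)        # (N, 4, 4)
    warped = np.einsum('nij,nj->ni', blended, homo)
    return warped[:, :3]
```

Because the warp runs forward (rest pose to deformed pose), reposing amounts to feeding new bone transforms, which is what makes the representation reposable without inverting a deformation field.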

    GigaVoxels: Ray-guided Streaming for Efficient and Detailed Voxel Rendering

    Get PDF
    Figure 1: Images show volume data consisting of billions of voxels rendered with our dynamic sparse octree approach. Our algorithm achieves real-time to interactive rates on volumes exceeding the GPU memory capacities by far, thanks to an efficient streaming based on a ray-casting solution. Basically, the volume is only used at the resolution that is needed to produce the final image. Besides the gain in memory and speed, our rendering is inherently anti-aliased. We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly based on information extracted during rendering. Our data structure exploits the fact that, in CG scenes, details are often concentrated at the interface between free space and clusters of density, and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, like the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), or of a fractal (theoretically infinite resolution). All examples are rendered on current generation hardware at 20-90 fps and respect the limited GPU memory budget. This is the author's version of the paper. The ultimate version has been published in the I3D 2009 conference proceedings.
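The streaming and caching machinery is beyond a sketch, but the core idea of "using the volume only at the resolution needed for the final image" reduces to picking the octree level whose voxel projects to about one pixel. A simplified, hypothetical illustration (perspective camera, square pixels assumed):

```python
import math

def required_level(distance, fov_y, screen_height, volume_size, max_level):
    """Pick the coarsest octree level whose voxels project to roughly one
    pixel. Voxels at level L have world size volume_size / 2**L."""
    # World-space extent covered by one pixel at the given distance.
    pixel_world = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    if pixel_world <= 0:
        return max_level
    level = math.ceil(math.log2(volume_size / pixel_world))
    return max(0, min(max_level, level))
```

Distant regions thus request coarse bricks and nearby regions fine ones, which is what keeps multi-billion-voxel volumes within a limited GPU memory budget.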

    Quantifying spatial, temporal, angular and spectral structure of effective daylight in perceptually meaningful ways

    Full text link
    We present a method to capture the 7-dimensional light field structure and translate it into perceptually relevant information. Our spectral cubic illumination method quantifies objective correlates of perceptually relevant diffuse and directed light components, including their variations over time, space, color, and direction, and the environment's response to sky and sunlight. We applied it 'in the wild', capturing how light on a sunny day differs between light and shadow, and how light varies over sunny and cloudy days. We discuss the added value of our method for capturing nuanced lighting effects on scene and object appearance, such as chromatic gradients.
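The diffuse/directed split builds on the cubic illumination idea of measuring illuminance on the six faces of a cube. One common formulation, sketched here with hypothetical names and without the spectral extension the abstract describes, takes opposed-face differences as the light vector and the symmetric residual as the diffuse part:

```python
import math

def cubic_split(e_pos, e_neg):
    """Split six cube-face illuminances into a directed light vector and a
    diffuse (symmetric) residual, following the cubic-illumination idea.
    e_pos: illuminances on the +x, +y, +z faces; e_neg: on -x, -y, -z."""
    vector = [p - n for p, n in zip(e_pos, e_neg)]          # directed part
    symmetric = [min(p, n) for p, n in zip(e_pos, e_neg)]   # residual per axis
    directed = math.sqrt(sum(v * v for v in vector))        # vector magnitude
    diffuse = sum(symmetric) / 3.0                          # mean residual
    return directed, diffuse, vector
```

A fully symmetric field yields a zero light vector (pure diffuse), while light arriving from a single face yields a zero residual (pure directed), matching the two perceptual extremes the method quantifies.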

    Fast Scene Voxelization Revisited

    Get PDF
    This sketch paper presents an overview of "Fast Scene Voxelization and Applications", published at the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. It introduces slicemaps, which correspond to a GPU-friendly voxel representation of a scene. This voxelization is done at run-time on the order of milliseconds, even for complex and dynamic scenes containing more than 1M polygons. Creation and storage are performed on the graphics card, avoiding unnecessary data transfer. Regular but also deformed grids are possible, in particular to better fit the scene geometry. Several applications are demonstrated: shadow calculation, refraction simulation, and shadow volume culling/clamping.
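The slicemap idea encodes the depth axis as a bitmask: each texel of a 2D map stores one bit per depth slice, so a 32-bit texel holds 32 slices. A CPU-side toy version of this encoding (names hypothetical; the real technique fills the map in a single GPU rendering pass):

```python
import numpy as np

def voxelize_points(points, grid_res=32):
    """Toy slicemap: a 2D array of 32-bit masks where bit k of cell (x, y)
    marks occupancy of depth slice k. points lie in the unit cube [0, 1)^3."""
    assert grid_res <= 32  # one bit per depth slice in a uint32 texel
    slicemap = np.zeros((grid_res, grid_res), dtype=np.uint32)
    idx = np.clip((points * grid_res).astype(int), 0, grid_res - 1)
    for x, y, z in idx:
        slicemap[y, x] |= np.uint32(1 << int(z))
    return slicemap
```

Because the whole depth column lives in one texel, queries such as "is anything between depths a and b" become a single AND against a range mask, which is what makes the representation cheap for shadow and refraction lookups.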

    Occlusion Textures for Plausible Soft Shadows

    Get PDF
    This paper presents a new approach to compute plausible soft shadows for complex dynamic scenes and rectangular light sources. We estimate the occlusion at each point of the scene using prefiltered occlusion textures, which dynamically approximate the scene geometry. The algorithm is fast and its performance is independent of the light's size. Being image-based, it is mostly independent of the scene complexity and type. No a priori information is needed, and there is no caster/receiver separation. This makes the method appealing and easy to use.
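"Prefiltered" here means that occlusion can be averaged ahead of time, so a single coarse-level lookup returns fractional visibility instead of a hard binary test. A minimal sketch of such prefiltering as a mip chain built by 2x2 averaging (hypothetical names; square power-of-two maps assumed):

```python
import numpy as np

def prefilter_occlusion(occupancy):
    """Build a mip chain of a binary occupancy map by 2x2 box filtering.
    Coarser levels store fractional occlusion in [0, 1], so one lookup at
    the right level approximates the blocked fraction of an area light."""
    levels = [occupancy.astype(float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                              a[0::2, 1::2] + a[1::2, 1::2]))
    return levels
```

Picking the mip level to match the solid angle of the rectangular light is what decouples the cost from the light's size: larger lights just read coarser levels.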

    A Fast Geometric Multigrid Method for Curved Surfaces

    Full text link
    We introduce Gravo MG, a geometric multigrid method for solving linear systems arising from variational problems on surfaces in geometry processing. Our scheme uses point clouds as a reduced representation of the levels of the multigrid hierarchy to achieve a fast hierarchy construction and to extend the applicability of the method from triangle meshes to other surface representations like point clouds, nonmanifold meshes, and polygonal meshes. To build the prolongation operators, we associate each point of the hierarchy to a triangle constructed from points in the next coarser level. We obtain well-shaped candidate triangles by computing graph Voronoi diagrams centered around the coarse points and determining neighboring Voronoi cells. Our selection of triangles ensures that the connections of each point to points at adjacent coarser and finer levels are balanced in the tangential directions. As a result, we obtain sparse prolongation matrices with three entries per row and fast convergence of the solver. Comment: Ruben Wiersma and Ahmad Nasikun contributed equally. To be published in SIGGRAPH 2023. 16 pages total (8 main, 5 supplement), 14 figures.
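The graph Voronoi step can be pictured as a multi-source breadth-first search: every vertex joins the cell of its nearest coarse seed by hop distance. A minimal sketch with hypothetical names (the paper works on k-NN graphs of points with weighted distances; unit edge weights are assumed here):

```python
from collections import deque

def graph_voronoi(adjacency, seeds):
    """Multi-source BFS over a graph: each vertex is assigned to the
    nearest seed by hop count, yielding graph Voronoi cells.
    adjacency: dict mapping vertex -> list of neighbor vertices."""
    owner = {v: None for v in adjacency}
    queue = deque()
    for s in seeds:
        owner[s] = s
        queue.append(s)
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if owner[w] is None:     # first arrival wins: nearest seed
                owner[w] = owner[v]
                queue.append(w)
    return owner
```

Pairs of adjacent Voronoi cells then identify coarse points that are close on the surface, which is where the well-shaped candidate triangles for the three-entry-per-row prolongation come from.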

    Approximated and User Steerable tSNE for Progressive Visual Analytics

    Full text link
    Progressive Visual Analytics aims at improving the interactivity of existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a well-suited technique for the visualization of high-dimensional data. tSNE can create meaningful intermediate results but suffers from a slow initialization that constrains its application in Progressive Visual Analytics. We introduce a controllable tSNE approximation (A-tSNE), which trades off speed and accuracy to enable interactive data exploration. We offer real-time visualization techniques, including a density-based solution and a Magic Lens to inspect the degree of approximation. With this feedback, the user can decide on local refinements and steer the approximation level during the analysis. We demonstrate our technique with several datasets, in a real-world research scenario and for the real-time analysis of high-dimensional streams, to illustrate its effectiveness for interactive data analysis.
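The slow initialization in tSNE is dominated by exact nearest-neighbor computation, so a controllable approximation can start from cheap approximate neighborhoods and let the user refine them locally. A toy illustration of that trade-off (hypothetical names; not A-tSNE's actual approximation scheme, which uses approximate KNN structures):

```python
import numpy as np

def approx_knn(data, k, n_candidates, rng):
    """Cheap approximate k-NN: compare each point against a random
    candidate subset instead of all points (speed vs. accuracy)."""
    n = len(data)
    neighbors = np.empty((n, k), dtype=int)
    for i in range(n):
        cand = rng.choice(n, size=n_candidates, replace=False)
        cand = cand[cand != i]
        d = np.linalg.norm(data[cand] - data[i], axis=1)
        neighbors[i] = cand[np.argsort(d)[:k]]
    return neighbors

def refine(data, neighbors, indices):
    """User-steered refinement: recompute exact neighbors for a subset,
    e.g. the points currently under the analyst's lens."""
    k = neighbors.shape[1]
    for i in indices:
        d = np.linalg.norm(data - data[i], axis=1)
        d[i] = np.inf                      # a point is not its own neighbor
        neighbors[i] = np.argsort(d)[:k]
    return neighbors
```

The embedding can start immediately from the approximate neighborhoods and converge toward the exact result as refinements arrive, which is the progressive behavior the paper targets.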

    LightGuider: Guiding Interactive Lighting Design using Suggestions, Provenance, and Quality Visualization

    Full text link
    LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria, weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.